Exploration server: Health and musical practice

Note: this site is under development.
Note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.

Internal identifier: 000211 (Main/Exploration); previous: 000210; next: 000212


Authors: Chris Rhodes [United Kingdom]; Richard Allmendinger [United Kingdom]; Ricardo Climent [United Kingdom]

Source:

RBID: pubmed:33297582

Abstract

Interactive music uses wearable sensors (i.e., gestural interfaces-GIs) and biometric datasets to reinvent traditional human-computer interaction and enhance music composition. In recent years, machine learning (ML) has been important for the artform. This is because ML helps process complex biometric datasets from GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML software amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, previous research does not inform optimum ML model choice, within music, or compare model performance. Wekinator offers several ML models. Thus, we used Wekinator and the Myo armband GI and study three performance gestures for piano practice to solve this problem. Using these, we trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy and if optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur and gesture representation disparately affects model mapping behaviour; impacting music practice.

DOI: 10.3390/e22121384
PubMed: 33297582
PubMed Central: PMC7762429


Affiliations:


Links to previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.</title>
<author>
<name sortKey="Rhodes, Chris" sort="Rhodes, Chris" uniqKey="Rhodes C" first="Chris" last="Rhodes">Chris Rhodes</name>
<affiliation wicri:level="4">
<nlm:affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>NOVARS Research Centre, University of Manchester, Manchester M13 9PL</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Allmendinger, Richard" sort="Allmendinger, Richard" uniqKey="Allmendinger R" first="Richard" last="Allmendinger">Richard Allmendinger</name>
<affiliation wicri:level="4">
<nlm:affiliation>Alliance Manchester Business School, University of Manchester, Manchester M15 6PB, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Alliance Manchester Business School, University of Manchester, Manchester M15 6PB</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Climent, Ricardo" sort="Climent, Ricardo" uniqKey="Climent R" first="Ricardo" last="Climent">Ricardo Climent</name>
<affiliation wicri:level="4">
<nlm:affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>NOVARS Research Centre, University of Manchester, Manchester M13 9PL</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2020">2020</date>
<idno type="RBID">pubmed:33297582</idno>
<idno type="pmid">33297582</idno>
<idno type="doi">10.3390/e22121384</idno>
<idno type="pmc">PMC7762429</idno>
<idno type="wicri:Area/Main/Corpus">000086</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">000086</idno>
<idno type="wicri:Area/Main/Curation">000086</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">000086</idno>
<idno type="wicri:Area/Main/Exploration">000086</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.</title>
<author>
<name sortKey="Rhodes, Chris" sort="Rhodes, Chris" uniqKey="Rhodes C" first="Chris" last="Rhodes">Chris Rhodes</name>
<affiliation wicri:level="4">
<nlm:affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>NOVARS Research Centre, University of Manchester, Manchester M13 9PL</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Allmendinger, Richard" sort="Allmendinger, Richard" uniqKey="Allmendinger R" first="Richard" last="Allmendinger">Richard Allmendinger</name>
<affiliation wicri:level="4">
<nlm:affiliation>Alliance Manchester Business School, University of Manchester, Manchester M15 6PB, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Alliance Manchester Business School, University of Manchester, Manchester M15 6PB</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Climent, Ricardo" sort="Climent, Ricardo" uniqKey="Climent R" first="Ricardo" last="Climent">Ricardo Climent</name>
<affiliation wicri:level="4">
<nlm:affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</nlm:affiliation>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>NOVARS Research Centre, University of Manchester, Manchester M13 9PL</wicri:regionArea>
<orgName type="university">Université de Manchester</orgName>
<placeName>
<settlement type="city">Manchester</settlement>
<region type="nation">Angleterre</region>
<region nuts="2" type="region">Grand Manchester</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Entropy (Basel, Switzerland)</title>
<idno type="eISSN">1099-4300</idno>
<imprint>
<date when="2020" type="published">2020</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Interactive music uses wearable sensors (i.e., gestural interfaces-GIs) and biometric datasets to reinvent traditional human-computer interaction and enhance music composition. In recent years, machine learning (ML) has been important for the artform. This is because ML helps process complex biometric datasets from GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML software amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, previous research does not inform optimum ML model choice, within music, or compare model performance. Wekinator offers several ML models. Thus, we used Wekinator and the Myo armband GI and study three performance gestures for piano practice to solve this problem. Using these, we trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy and if optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur and gesture representation disparately affects model mapping behaviour; impacting music practice.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="PubMed-not-MEDLINE" Owner="NLM">
<PMID Version="1">33297582</PMID>
<DateRevised>
<Year>2021</Year>
<Month>02</Month>
<Day>24</Day>
</DateRevised>
<Article PubModel="Electronic">
<Journal>
<ISSN IssnType="Electronic">1099-4300</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>22</Volume>
<Issue>12</Issue>
<PubDate>
<Year>2020</Year>
<Month>Dec</Month>
<Day>07</Day>
</PubDate>
</JournalIssue>
<Title>Entropy (Basel, Switzerland)</Title>
<ISOAbbreviation>Entropy (Basel)</ISOAbbreviation>
</Journal>
<ArticleTitle>New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.</ArticleTitle>
<ELocationID EIdType="pii" ValidYN="Y">E1384</ELocationID>
<ELocationID EIdType="doi" ValidYN="Y">10.3390/e22121384</ELocationID>
<Abstract>
<AbstractText>Interactive music uses wearable sensors (i.e., gestural interfaces-GIs) and biometric datasets to reinvent traditional human-computer interaction and enhance music composition. In recent years, machine learning (ML) has been important for the artform. This is because ML helps process complex biometric datasets from GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML software amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, previous research does not inform optimum ML model choice, within music, or compare model performance. Wekinator offers several ML models. Thus, we used Wekinator and the Myo armband GI and study three performance gestures for piano practice to solve this problem. Using these, we trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy and if optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur and gesture representation disparately affects model mapping behaviour; impacting music practice.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Rhodes</LastName>
<ForeName>Chris</ForeName>
<Initials>C</Initials>
<Identifier Source="ORCID">0000-0002-1899-8222</Identifier>
<AffiliationInfo>
<Affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Allmendinger</LastName>
<ForeName>Richard</ForeName>
<Initials>R</Initials>
<Identifier Source="ORCID">0000-0003-1236-3143</Identifier>
<AffiliationInfo>
<Affiliation>Alliance Manchester Business School, University of Manchester, Manchester M15 6PB, UK.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Climent</LastName>
<ForeName>Ricardo</ForeName>
<Initials>R</Initials>
<Identifier Source="ORCID">0000-0002-0484-8055</Identifier>
<AffiliationInfo>
<Affiliation>NOVARS Research Centre, University of Manchester, Manchester M13 9PL, UK.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<GrantList CompleteYN="Y">
<Grant>
<GrantID>2063473</GrantID>
<Agency>Engineering and Physical Sciences Research Council</Agency>
<Country></Country>
</Grant>
</GrantList>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2020</Year>
<Month>12</Month>
<Day>07</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Switzerland</Country>
<MedlineTA>Entropy (Basel)</MedlineTA>
<NlmUniqueID>101243874</NlmUniqueID>
<ISSNLinking>1099-4300</ISSNLinking>
</MedlineJournalInfo>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="N">HCI</Keyword>
<Keyword MajorTopicYN="N">Myo</Keyword>
<Keyword MajorTopicYN="N">Wekinator</Keyword>
<Keyword MajorTopicYN="N">gestural interfaces</Keyword>
<Keyword MajorTopicYN="N">gesture representation</Keyword>
<Keyword MajorTopicYN="N">interactive machine learning</Keyword>
<Keyword MajorTopicYN="N">interactive music</Keyword>
<Keyword MajorTopicYN="N">music composition</Keyword>
<Keyword MajorTopicYN="N">optimisation</Keyword>
<Keyword MajorTopicYN="N">performance gestures</Keyword>
</KeywordList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2020</Year>
<Month>10</Month>
<Day>14</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="revised">
<Year>2020</Year>
<Month>12</Month>
<Day>03</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2020</Year>
<Month>12</Month>
<Day>03</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2020</Year>
<Month>12</Month>
<Day>10</Day>
<Hour>1</Hour>
<Minute>3</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2020</Year>
<Month>12</Month>
<Day>11</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2020</Year>
<Month>12</Month>
<Day>11</Day>
<Hour>6</Hour>
<Minute>1</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">33297582</ArticleId>
<ArticleId IdType="pii">e22121384</ArticleId>
<ArticleId IdType="doi">10.3390/e22121384</ArticleId>
<ArticleId IdType="pmc">PMC7762429</ArticleId>
</ArticleIdList>
<ReferenceList>
<Reference>
<Citation>Comput Intell Neurosci. 2010;:267671</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">20300580</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>J Arthroplasty. 2018 Aug;33(8):2358-2361</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">29656964</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Sensors (Basel). 2018 May 18;18(5):</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">29783659</ArticleId>
</ArticleIdList>
</Reference>
<Reference>
<Citation>Front Psychol. 2019 Mar 04;10:344</Citation>
<ArticleIdList>
<ArticleId IdType="pubmed">30886595</ArticleId>
</ArticleIdList>
</Reference>
</ReferenceList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
<region>
<li>Angleterre</li>
<li>Grand Manchester</li>
</region>
<settlement>
<li>Manchester</li>
</settlement>
<orgName>
<li>Université de Manchester</li>
</orgName>
</list>
<tree>
<country name="Royaume-Uni">
<region name="Angleterre">
<name sortKey="Rhodes, Chris" sort="Rhodes, Chris" uniqKey="Rhodes C" first="Chris" last="Rhodes">Chris Rhodes</name>
</region>
<name sortKey="Allmendinger, Richard" sort="Allmendinger, Richard" uniqKey="Allmendinger R" first="Richard" last="Allmendinger">Richard Allmendinger</name>
<name sortKey="Climent, Ricardo" sort="Climent, Ricardo" uniqKey="Climent R" first="Ricardo" last="Climent">Ricardo Climent</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000211 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000211 | SxmlIndent | more
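The two commands above can be wrapped in a small helper so the record key is a parameter rather than hard-coded. This is a minimal sketch only: `HfdSelect` and `SxmlIndent` are Dilib tools referenced by this page and may not be installed, so the hypothetical `show_record_cmd` function builds and prints the command line (a dry run) instead of executing it; the fallback path is a placeholder assumption.

```shell
#!/bin/sh
# Sketch: parameterise the record lookup shown above (dry run).
# HfdSelect and SxmlIndent are Dilib tools assumed from this page;
# the function only prints the command so it can be inspected or piped
# to `sh` on a machine where Dilib is actually available.
show_record_cmd() {
    key="$1"                                          # internal identifier, e.g. 000211
    hfd="${EXPLOR_STEP:-/path/to/Main/Exploration}/biblio.hfd"  # placeholder fallback
    printf 'HfdSelect -h %s -nk %s | SxmlIndent | more\n' "$hfd" "$key"
}

show_record_cmd 000211
```

On a host with Dilib installed, `show_record_cmd 000211 | sh` would run the lookup directly.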

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:33297582
   |texte=   New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:33297582" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 
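The three-stage pipeline above (index lookup by RBID, record selection, wiki conversion) can likewise be sketched as a reusable function. As before, `HfdIndexSelect`, `HfdSelect`, and `NlmPubMed2Wicri` are Dilib tools assumed from this page, so the hypothetical `wiki_pages_cmd` function only composes and prints the pipeline rather than executing it.

```shell
#!/bin/sh
# Sketch: compose the wiki-generation pipeline shown above (dry run).
# HfdIndexSelect, HfdSelect and NlmPubMed2Wicri are Dilib tools assumed
# from this page; only the command line is printed here.
wiki_pages_cmd() {
    rbid="$1"                                         # e.g. pubmed:33297582
    area="$2"                                         # e.g. SanteMusiqueV1
    base="${EXPLOR_AREA:-/path/to/area}/Data/Main/Exploration"  # placeholder fallback
    printf 'HfdIndexSelect -h %s/RBID.i -Sk "%s" | HfdSelect -Kh %s/biblio.hfd | NlmPubMed2Wicri -a %s\n' \
        "$base" "$rbid" "$base" "$area"
}

wiki_pages_cmd pubmed:33297582 SanteMusiqueV1
```

The RBID index (`RBID.i`) resolves the PubMed identifier to the internal record key, which is why the lookup stage precedes the `biblio.hfd` selection.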

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021